A New Methodology for Calculating Distributions of Reward Accumulated During a Finite Interval
Abstract
Markov reward models are an important formalism by which to obtain dependability and performability measures of computer systems and networks. In this context, it is particularly important to determine the probability distribution function of the reward accumulated during a finite interval. The interval may correspond to the mission period in a mission-critical system, the time between scheduled maintenances, or a warranty period. In such models, changes in state correspond to changes in system structure (due to faults and repairs), and the reward structure depends on the measure of interest. For example, the reward rates may represent a productivity rate while in that state, if performability is considered, or the binary values zero and one, if interval availability is of interest. This paper presents a new methodology to calculate the distribution of reward accumulated over a finite interval. In particular, we derive recursive expressions for the distribution of reward accumulated given that a particular sequence of state changes occurs during the interval, and we explore paths one at a time. The expressions for the conditional accumulated reward are new and numerically stable. In addition, by exploring paths individually, we avoid the memory-growth problems experienced when previous approaches are applied to large models. The utility of the methodology is illustrated via application to a realistic fault-tolerant multiprocessor model with over half a million states.
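To give a concrete feel for the path-based decomposition described in the abstract, the sketch below is a minimal Monte Carlo illustration, not the paper's recursive method: it simulates a small, made-up three-state Markov reward model over [0, t], groups samples by the sequence of state changes observed in the interval, and combines the per-path estimates as P(Y(t) ≤ y) = Σ_paths P(path) · P(Y(t) ≤ y | path). The generator matrix Q, reward rates, and horizon are illustrative assumptions.

```python
# Hypothetical illustration (not the paper's algorithm): Monte Carlo estimate of the
# distribution of reward Y(t) accumulated by a small Markov reward model over [0, t],
# with samples grouped by the state-change sequence (path) seen during the interval.
import random
from collections import defaultdict

# Illustrative 3-state CTMC: generator Q (rows sum to 0) and per-state reward rates.
Q = [[-3.0,  2.0,  1.0],
     [ 1.0, -1.5,  0.5],
     [ 0.5,  0.5, -1.0]]
reward = [1.0, 0.5, 0.0]   # e.g. productivity rate associated with each structural state
t_horizon = 2.0
initial_state = 0

def sample_path_and_reward(rng):
    """Simulate one trajectory over [0, t_horizon]; return (path, accumulated reward)."""
    state, clock, acc = initial_state, 0.0, 0.0
    path = [state]
    while True:
        rate = -Q[state][state]
        sojourn = rng.expovariate(rate) if rate > 0 else float("inf")
        stay = min(sojourn, t_horizon - clock)
        acc += reward[state] * stay          # reward accrues at the state's rate
        clock += stay
        if clock >= t_horizon:
            return tuple(path), acc
        # Jump: choose the next state with probability Q[state][j] / rate.
        u, cum = rng.random() * rate, 0.0
        for j, q in enumerate(Q[state]):
            if j == state:
                continue
            cum += q
            if u <= cum:
                state = j
                break
        path.append(state)

rng = random.Random(42)
samples_by_path = defaultdict(list)
for _ in range(20000):
    path, y = sample_path_and_reward(rng)
    samples_by_path[path].append(y)

# Combine per-path conditional estimates: P(Y(t) <= y) = sum_p P(p) * P(Y(t) <= y | p).
y = 1.0
n_total = sum(len(v) for v in samples_by_path.values())
cdf = sum(len(v) / n_total * (sum(r <= y for r in v) / len(v))
          for v in samples_by_path.values())
print(f"Estimated P(Y({t_horizon}) <= {y}) = {cdf:.3f}")
```

In the paper's approach the conditional terms P(Y(t) ≤ y | path) are obtained from numerically stable recursive expressions rather than simulation, and paths are explored one at a time so that memory use stays bounded even for very large state spaces.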